37 research outputs found

    Deformable Registration through Learning of Context-Specific Metric Aggregation

    We propose a novel weakly supervised discriminative algorithm for learning context-specific registration metrics as a linear combination of conventional similarity measures. Conventional metrics have been used extensively over the past two decades, so both their strengths and limitations are well known. The challenge is to find the optimal relative weighting (or parameters) of the different metrics forming the similarity measure of the registration algorithm. Hand-tuning these parameters yields suboptimal solutions and quickly becomes infeasible as the number of metrics increases. Furthermore, such a hand-crafted combination can only be applied at a global scale (the entire volume) and therefore cannot account for differing tissue properties. We propose a learning algorithm that estimates these parameters locally, conditioned on the semantic classes of the data. The objective function of our formulation is a special case of non-convex function, a difference of convex functions, which we optimize using the concave-convex procedure. As a proof of concept, we show the impact of our approach on three challenging datasets covering different anatomical structures and modalities. Comment: Accepted for publication in the 8th International Workshop on Machine Learning in Medical Imaging (MLMI 2017), in conjunction with MICCAI 2017
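    The aggregation idea in this abstract — a similarity measure built as a weighted sum of conventional metrics, with a separate weight vector per semantic class — can be sketched as follows. This is a minimal illustration: the metric set, the weights, and the class names are assumptions, not the metrics or values learned in the paper.

    ```python
    import numpy as np

    def ssd(a, b):
        """Sum of squared differences (lower = more similar)."""
        return float(np.mean((a - b) ** 2))

    def ncc(a, b):
        """Negative normalized cross-correlation (lower = more similar)."""
        a0, b0 = a - a.mean(), b - b.mean()
        denom = np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum()) + 1e-12
        return float(-(a0 * b0).sum() / denom)

    METRICS = [ssd, ncc]

    def aggregated_metric(fixed_patch, moving_patch, weights):
        """Linear combination of conventional metrics with learned weights."""
        return sum(w * m(fixed_patch, moving_patch)
                   for w, m in zip(weights, METRICS))

    # Class-conditioned weights: one illustrative weight vector per semantic class,
    # standing in for the parameters the paper learns with the concave-convex procedure.
    class_weights = {"bone": [0.8, 0.2], "soft_tissue": [0.1, 0.9]}
    ```

    In the paper these weights are estimated locally per class rather than hand-set; the sketch only shows the form of the aggregated measure being optimized.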

    The landscape of molecular chaperones across human tissues reveals a layered architecture of core and variable chaperones

    The sensitivity of the protein-folding environment to chaperone disruption can be highly tissue-specific. Yet the organization of the chaperone system across physiological human tissues has received little attention. Through computational analyses of large-scale tissue transcriptomes, we show that the chaperone system is composed of core elements that are uniformly expressed across tissues, and variable elements that are differentially expressed to fit tissue-specific requirements. We demonstrate via a proteomic analysis that the muscle-specific signature is functional and conserved. Core chaperones are significantly more abundant across tissues and more important for cell survival than variable chaperones. Together with variable chaperones, they form tissue-specific functional networks. Analysis of human organ development and aging brain transcriptomes reveals that these functional networks are established in development and decline with age. In this work, we expand the known functional organization of de novo versus stress-inducible eukaryotic chaperones into a layered core-variable architecture in multicellular organisms.
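    The core-versus-variable distinction rests on how uniformly a chaperone is expressed across tissues. A minimal sketch of such a split is below; the coefficient-of-variation criterion, the threshold, and the gene names are illustrative assumptions, not the paper's actual analysis.

    ```python
    import numpy as np

    def classify_chaperones(expr, cv_threshold=0.25):
        """Split chaperones into 'core' (uniform expression across tissues)
        and 'variable' (tissue-dependent expression) by the coefficient of
        variation of each gene's per-tissue expression levels."""
        labels = {}
        for gene, levels in expr.items():
            levels = np.asarray(levels, dtype=float)
            cv = levels.std() / (levels.mean() + 1e-12)
            labels[gene] = "core" if cv < cv_threshold else "variable"
        return labels

    # Hypothetical expression levels across four tissues.
    expression = {
        "HSPA8": [10.0, 10.5, 9.8, 10.2],   # near-uniform -> core
        "HSPB7": [1.0, 30.0, 2.0, 1.0],     # muscle-enriched -> variable
    }
    ```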

    Robust Multimodal Image Registration Using Deep Recurrent Reinforcement Learning

    The crucial components of a conventional image registration method are the choice of feature representations and similarity measures. These two components, although elaborately designed, are to some extent handcrafted using human knowledge. In this work, both components are instead learned in an end-to-end manner via reinforcement learning. Specifically, an artificial agent, composed of a combined policy and value network, is trained to move the moving image in the right direction. We train this network with an asynchronous reinforcement learning algorithm, in which a customized reward function encourages robust image registration. The trained network is further combined with a lookahead inference scheme to improve registration capability. The advantage of this algorithm is demonstrated by its superior performance on clinical MR and CT image pairs compared with other state-of-the-art medical image registration methods.
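    The act-observe-reward cycle behind such an agent can be illustrated with a toy loop. This is a sketch only: the trained policy/value network is replaced by a greedy one-step rule, the similarity is plain SSD rather than the paper's customized reward, and the wrap-around shift is a simplification.

    ```python
    import numpy as np

    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # one-pixel translations

    def shift(img, dy, dx):
        """Toy translation of the moving image (wraps at the borders)."""
        return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

    def similarity(fixed, moving):
        """Negative SSD: larger means better aligned."""
        return -float(np.mean((fixed - moving) ** 2))

    def register_greedy(fixed, moving, max_steps=50):
        """Greedy stand-in for the trained policy: at each step take the
        action with the largest one-step reward (similarity gain); stop
        when no action improves the alignment."""
        offset = [0, 0]
        current = similarity(fixed, moving)
        for _ in range(max_steps):
            scores = [similarity(fixed, shift(moving, offset[0] + dy, offset[1] + dx))
                      for dy, dx in ACTIONS]
            best = int(np.argmax(scores))
            if scores[best] <= current:
                break
            offset[0] += ACTIONS[best][0]
            offset[1] += ACTIONS[best][1]
            current = scores[best]
        return (offset[0], offset[1])
    ```

    In the paper the action is chosen by a learned network rather than by exhaustive one-step lookahead, which is what lets it cope with non-monotone multimodal similarity landscapes.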

    Profiles of US and CT imaging features with a high probability of appendicitis

    To identify and evaluate profiles of US and CT features associated with acute appendicitis. Consecutive patients presenting with acute abdominal pain at the emergency department were invited to participate in this study. All patients underwent US and CT. Imaging features known to be associated with appendicitis, along with an imaging diagnosis, were prospectively recorded by two independent radiologists. A final diagnosis was assigned after 6 months. Associations between appendiceal imaging features and a final diagnosis of appendicitis were evaluated with logistic regression analysis. Appendicitis was assigned to 284 of 942 evaluated patients (30%). All evaluated features were associated with appendicitis. Imaging profiles were created after multivariable logistic regression analysis. Of 147 patients who had a thickened appendix, local transducer tenderness and peri-appendiceal fat infiltration on US, 139 (95%) had appendicitis. On CT, of 119 patients in whom the appendix was completely visualised and thickened, with peri-appendiceal fat infiltration and appendiceal enhancement, 114 (96%) had a final diagnosis of appendicitis. When at least two of these essential features were present on US or CT, sensitivity was 92% (95% CI 89-96%) and 96% (95% CI 93-98%), respectively. Most patients with appendicitis can be categorised within a few imaging profiles on US and CT. When two of the essential features are present, the diagnosis of appendicitis can be made accurately.
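    The "at least two essential features" decision rule and the sensitivity it is scored by can be sketched directly. The feature names and the tiny synthetic patient list are hypothetical illustrations; they are not the study's data.

    ```python
    # Illustrative names for the essential US features described in the abstract.
    US_ESSENTIAL = ["thickened_appendix", "transducer_tenderness", "fat_infiltration"]

    def rule_positive(features, essential, k=2):
        """Flag appendicitis when at least k essential features are present."""
        return sum(bool(features.get(f, False)) for f in essential) >= k

    def sensitivity(patients, essential, k=2):
        """Fraction of true appendicitis cases that the rule flags."""
        cases = [p for p in patients if p["appendicitis"]]
        flagged = [p for p in cases if rule_positive(p["features"], essential, k)]
        return len(flagged) / len(cases)
    ```

    The study derives its profiles from multivariable logistic regression; the counting rule above only mirrors how the final "two essential features" criterion is applied and evaluated.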

    A variational joint segmentation and registration framework for multimodal images

    Image segmentation and registration are closely related image-processing techniques that are often required simultaneously. In this work, we introduce an optimization-based joint registration and segmentation model for multimodal image deformation. The model combines an active-contour variational term with a mutual information (MI) fitting term, thereby addressing the difficulties of performing segmentation and registration simultaneously for multimodal images. This combination takes into account image structure boundaries and object movement, leading to a robust dynamic scheme that links object-boundary information as it changes over time. Comparison with the state of the art shows that our method leads to more consistent registrations and more accurate results.
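    Mutual information is the standard multimodal fitting term because it rewards statistical dependence between intensities rather than intensity equality. A common histogram-based estimate can be sketched as follows (the bin count is an arbitrary choice, and this is a generic estimator, not the paper's specific variational discretization):

    ```python
    import numpy as np

    def mutual_information(a, b, bins=32):
        """Histogram estimate of the mutual information between two
        equally sized images: MI = sum p(x,y) * log(p(x,y) / (p(x)p(y)))."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()                    # joint intensity distribution
        px = pxy.sum(axis=1, keepdims=True)          # marginal of image a
        py = pxy.sum(axis=0, keepdims=True)          # marginal of image b
        nz = pxy > 0                                 # avoid log(0)
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
    ```

    A perfectly aligned pair (even across modalities, where intensities differ but co-vary) scores higher than a misaligned one, which is what makes MI usable where SSD-style terms fail.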

    3-D Inorganic Crystal Structure Generation and Property Prediction via Representation Learning

    Generative models have been used successfully to synthesize completely novel images, text, music and speech. As such, they present an exciting opportunity for the design of new materials for functional applications. So far, generative deep-learning methods applied to molecular and drug discovery have yet to produce stable and novel 3-D crystal structures across multiple material classes. To that end, we herein present an autoencoder-based generative deep-representation-learning pipeline for geometrically optimized 3-D crystal structures that simultaneously predicts the values of eight target properties. The system is highly general, as demonstrated through the creation of novel materials from three separate material classes: binary alloys, ternary perovskites and Heusler compounds. Comparison of these generated structures with those optimized via electronic-structure calculations shows that our generated materials are valid and geometrically optimized.

    A reference map of the human binary protein interactome.

    Global insights into cellular organization and genome function require a comprehensive understanding of the interactome networks that mediate genotype-phenotype relationships(1,2). Here we present HuRI, an 'all-by-all' reference map of the human binary protein interactome. With approximately 53,000 protein-protein interactions, HuRI has roughly four times as many interactions as the set of high-quality curated interactions from small-scale studies. Integrating HuRI with genome(3), transcriptome(4) and proteome(5) data enables cellular function to be studied within most physiological or pathological cellular contexts. We demonstrate the utility of HuRI in identifying the specific subcellular roles of protein-protein interactions. Inferred tissue-specific networks reveal general principles for the formation of cellular context-specific functions and elucidate potential molecular mechanisms that might underlie tissue-specific phenotypes of Mendelian diseases. HuRI is a systematic proteome-wide reference that links genomic variation to phenotypic outcomes.

    Hand Shape Recognition Using a ToF Camera: An Application to Sign Language

    This master's thesis investigates the benefit of utilizing depth information acquired by a time-of-flight (ToF) camera for hand shape recognition from unrestricted viewpoints. Specifically, we assess the hypothesis that classical 3D content descriptors may be inappropriate for ToF depth images because of the 2.5D nature and noisiness of the data and the potentially expensive computations in 3D space. Instead, we extend 2D descriptors to make use of the additional semantics of depth images. Our system is based on the appearance-based retrieval paradigm, using a synthetic 3D hand model to generate its database, and runs at interactive frame rates. For increased robustness, no color, intensity, or time-coherence information is used. A novel, domain-specific algorithm is introduced for segmenting the forearm from the upper body by reprojecting the acquired geometry into the lateral view. Moreover, three kinds of descriptors exploiting depth data are proposed, and the design choices made are supported experimentally. The whole system is then evaluated on an American Sign Language fingerspelling dataset. However, the retrieval performance still leaves room for improvement. Several insights and possible reasons are discussed.
